A delay differential equation is one in which the derivative of the dependent variable depends on its value at a shifted value of the independent variable. Such equations are typically integrated numerically by supplying a history function over an interval equal in length to the delay. This presentation will consider equations that can be solved exactly without the need for any history function.

Consider first the equation

$$ f'(t) = a\,f(t-T) $$

where T is a constant displacement. The solution to the equation without displacement, T=0 , is a simple exponential function. This function has the special feature of separability when its argument is the sum of two values. When solving the equation with displacement, the exponential in the independent variable will cancel from both sides leaving an exponential of the displacement on the right-hand side. The remaining transcendental equation is then soluble by the Lambert W function.

Assuming a solution of the form $e^{ct}$, one has

$$ c\,e^{ct} = a\,e^{c(t-T)} \qquad cT\,e^{cT} = aT \qquad c = \frac{W(aT)}{T} $$

where the left-hand side has been arranged to match that of the defining equation of the Lambert function. The final solution to the differential equation is

$$ f(t) = \exp\!\left[ \frac{W(aT)}{T}\,t \right] $$

which is easily verified. The branch of the Lambert function is left unspecified here, but if one wants to recover the expected solution as the delay goes to zero

$$ f(t) = e^{at}, \qquad T \to 0 $$

then one can only use the principal branch, which is linear around the origin, since the other branches are all singular there. It should be noted that only $W_0$ and $W_{-1}$ have portions of their domain where they remain real, so that in general this solution contains oscillatory behavior.
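
As a quick numerical check of this solution, here is a minimal sketch using scipy.special.lambertw; the values of a and T below are arbitrary choices.

```python
# Verify that f(t) = exp[W(aT) t/T] satisfies f'(t) = a f(t-T).
import numpy as np
from scipy.special import lambertw

a, T = 1.0, 0.7
c = lambertw(a * T).real / T        # principal branch, real since aT > -1/e

t = np.linspace(0.0, 3.0, 7)
lhs = c * np.exp(c * t)             # f'(t) for a pure exponential
rhs = a * np.exp(c * (t - T))       # the delayed right-hand side
print(np.max(np.abs(lhs - rhs)))    # agrees to machine precision
```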

For a=1 here is how the principal branch of the solution compares to an exponential:

This is just a simple scaling of the exponential because the combination W(T)/T decreases monotonically from unity as T increases.
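
The scaling factor is simple to tabulate; a small sketch, again with scipy.special.lambertw and arbitrary sample delays:

```python
# W(T)/T on the principal branch: unity as T -> 0, decreasing thereafter.
from scipy.special import lambertw

for T in [0.01, 0.1, 0.5, 1.0, 2.0, 5.0]:
    print(f"T = {T:5}   W(T)/T = {lambertw(T).real / T:.4f}")
```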

The solution generalizes easily to higher-order equations. Given the equation

$$ f^{(n)}(t) = a\,f(t-T) $$

assume the same solution form and rearrange as before:

$$ c^n e^{ct} = a\,e^{c(t-T)} \qquad \frac{cT}{n}\,e^{cT/n} = \frac{T}{n}\,a^{1/n} \qquad c = \frac{n}{T}\,W\!\left( \frac{T}{n}\,a^{1/n} \right) $$

The n independent solutions to this equation are given by introducing nth roots of unity multiplying the coefficient in the exponential,

$$ f_k(t) = \exp\!\left[ e^{2\pi i k/n}\,\frac{n}{T}\,W\!\left( \frac{T}{n}\,a^{1/n} \right) t \right], \qquad k = 1, 2, \ldots, n $$

which generally leads to oscillatory solutions for $k \ne n$.
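
As a spot check of the principal member of this family (the case k = n, for which the root of unity is one), here is a sketch for a second-order equation; the values of a, T and n are arbitrary choices.

```python
# Verify that c = (n/T) W[(T/n) a^(1/n)] satisfies c^n = a exp(-cT),
# i.e. that exp(ct) solves f''(t) = a f(t-T) for n = 2.
import numpy as np
from scipy.special import lambertw

a, T, n = 1.0, 0.5, 2
c = (n / T) * lambertw((T / n) * a**(1.0 / n)).real

t = np.linspace(0.0, 3.0, 7)
lhs = c**n * np.exp(c * t)          # f''(t) for exp(ct)
rhs = a * np.exp(c * (t - T))       # a f(t-T)
print(np.max(np.abs(lhs - rhs)))    # agrees to machine precision
```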


Now consider an equation with the constant displacement in the opposite direction:

$$ f'(t) = a\,f(t+T) $$

Following the procedure as before gives

$$ c\,e^{ct} = a\,e^{c(t+T)} \qquad -cT\,e^{-cT} = -aT \qquad c = -\frac{W(-aT)}{T} $$

with the solution

$$ f(t) = \exp\!\left[ -\frac{W(-aT)}{T}\,t \right] $$

Since the principal branch of the Lambert function becomes complex when its argument is less than $-e^{-1}$, this solution will become oscillatory for large enough delay. This is a marked difference from the first equation.

For a=1 here is how the real (blue) and imaginary (purple) parts of the principal branch of the solution compare to an exponential (red):

The behavior changes significantly, as already noted, when $T > e^{-1} \approx 0.368$.
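
Here is a sketch of the same kind of check for this equation, which also shows the onset of complex values in the exponential coefficient once aT exceeds 1/e; the parameter values are arbitrary choices.

```python
# For f'(t) = a f(t+T) the coefficient is c = -W(-aT)/T, which becomes
# complex once aT > 1/e.
import numpy as np
from scipy.special import lambertw

a = 1.0
for T in [0.2, 0.3, 0.4, 0.5]:
    c = complex(-lambertw(-a * T)) / T
    t = np.linspace(0.0, 2.0, 5)
    lhs = c * np.exp(c * t)                 # f'(t)
    rhs = a * np.exp(c * (t + T))           # a f(t+T)
    print(f"T = {T}: c = {c}, residual = {np.max(np.abs(lhs - rhs)):.1e}")
```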

The solution again generalizes easily to higher-order equations. Given the equation

$$ f^{(n)}(t) = a\,f(t+T) $$

again assume the same solution form and rearrange:

$$ c^n e^{ct} = a\,e^{c(t+T)} \qquad -\frac{cT}{n}\,e^{-cT/n} = -\frac{T}{n}\,a^{1/n} \qquad c = -\frac{n}{T}\,W\!\left( -\frac{T}{n}\,a^{1/n} \right) $$

The n independent solutions to this equation are again given with nth roots of unity,

$$ f_k(t) = \exp\!\left[ -e^{2\pi i k/n}\,\frac{n}{T}\,W\!\left( -\frac{T}{n}\,a^{1/n} \right) t \right], \qquad k = 1, 2, \ldots, n $$

with the same general behavior.


Now consider an equation with equal delays in both directions:

$$ f'(t) = a\,f(t-T) + b\,f(t+T) $$

This equation again has an exponential solution, but the equation determining the coefficient in the exponential is no longer soluble with the Lambert function:

$$ c = a\,e^{-cT} + b\,e^{cT} $$

One can however define a new double Lambert function as the solution to the nonlinear transcendental equation

$$ W(x,y) = x\,e^{-W(x,y)} + y\,e^{W(x,y)} $$

The details of this function are considered in another presentation. The solution to the differential equation in terms of this double Lambert function is

$$ f(t) = \exp\!\left[ \frac{W(aT,bT)}{T}\,t \right] $$
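
The details of the double Lambert function belong to the other presentation, but as a crude numerical stand-in for checking purposes one can simply solve its defining equation with a bracketed root finder. The helper name double_lambert, the bracket, and the parameter values below are illustrative assumptions.

```python
# Real root of w = x exp(-w) + y exp(w), then a check of the two-delay equation.
import numpy as np
from scipy.optimize import brentq

def double_lambert(x, y, upper=5.0):
    """Real root of w = x*exp(-w) + y*exp(w) in (0, upper), when one exists."""
    g = lambda w: w - x * np.exp(-w) - y * np.exp(w)
    return brentq(g, 0.0, upper)

a, b, T = 1.0, 0.1, 0.2
c = double_lambert(a * T, b * T) / T

t = np.linspace(0.0, 2.0, 5)
lhs = c * np.exp(c * t)                                    # f'(t)
rhs = a * np.exp(c * (t - T)) + b * np.exp(c * (t + T))    # a f(t-T) + b f(t+T)
print(np.max(np.abs(lhs - rhs)))                           # essentially zero
```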

Here is how the real (blue) and imaginary (purple) parts of the principal branch of the solution compare to an exponential (red) for a variety of coefficient values:

This solution does not generalize simply to higher-order equations of the form

$$ f^{(n)}(t) = a\,f(t-T) + b\,f(t+T) $$

because the equation determining the coefficient in the exponential is no longer soluble with the double Lambert function,

$$ c^n = a\,e^{-cT} + b\,e^{cT} $$

since one cannot easily take an nth root of the linear combination of exponentials. Instead it appears one would have to define another new function that extends the double Lambert function.

The same appears to be true for an equation with unequal delays, such as

$$ f'(t) = a\,f(t-T_1) + b\,f(t+T_2) $$

The equation determining the coefficient in the exponential is in this case

$$ c = a\,e^{-cT_1} + b\,e^{cT_2} $$

which again is not soluble with the double Lambert function as defined. For any mixture of delay values and signs, one would have to adapt the numerical inversion used for the double Lambert function to each individual case.
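
As a sketch of such a case-by-case numerical inversion, the characteristic relation can be handed directly to a general-purpose root finder; the parameter values and the initial guess for scipy.optimize.fsolve are arbitrary choices that may need adjustment for other mixtures of delays.

```python
# Real root of c = a exp(-c T1) + b exp(c T2) and a check of the equation.
import numpy as np
from scipy.optimize import fsolve

a, b, T1, T2 = 1.0, 0.1, 0.3, 0.2
g = lambda c: c - a * np.exp(-c * T1) - b * np.exp(c * T2)
c = fsolve(g, 1.0)[0]

t = np.linspace(0.0, 2.0, 5)
lhs = c * np.exp(c * t)                                    # f'(t)
rhs = a * np.exp(c * (t - T1)) + b * np.exp(c * (t + T2))  # delayed right-hand side
print(c, np.max(np.abs(lhs - rhs)))
```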


To understand the limits of analytic solutions to delay equations, consider

$$ f'(t) = a\,[f(t-T)]^n $$

where n is an arbitrary value. The equation without delay is soluble by a binomial form:

$$ f' = a f^n \qquad \frac{f'}{f^n} = a \qquad \frac{f^{1-n}}{1-n} = at + C \qquad f = \left[ (1-n)\,at + f_0^{\,1-n} \right]^{\frac{1}{1-n}} $$

It is easy to verify that this expression is a solution of the equation without delay. The expression represents a generalization of the exponential function to arbitrary powers of the right-hand side. From the limit definition of the exponential one has

$$ \lim_{n \to 1} \left[ (1-n)\,at + f_0^{\,1-n} \right]^{\frac{1}{1-n}} = \lim_{n \to 1} f_0 \left[ 1 + (1-n)\,\frac{at}{f_0^{\,1-n}} \right]^{\frac{1}{1-n}} = f_0\,e^{at} $$

for consistency. Unfortunately this expression does not have the special feature of separability of the exponential mentioned above, so a complete solution to the equation under consideration is not immediately available. Indeed, to see how complicated this equation is, try a power series expansion of its right-hand side to first order in T:

$$ f' = a\,[f - T f' + \cdots]^n \approx a\,[f^n - nT f' f^{n-1}] $$
$$ \frac{f'}{f^n} + anT\,\frac{f'}{f} = a \qquad \frac{f^{1-n}}{1-n} + anT \ln f = at + C $$
$$ \frac{f^{1-n}}{anT} + \ln f^{1-n} = \frac{1-n}{nT}\,t + \frac{f_0^{\,1-n}}{anT} + \ln f_0^{\,1-n} $$
$$ \frac{f^{1-n}}{anT} \exp\!\left( \frac{f^{1-n}}{anT} \right) = \frac{f_0^{\,1-n}}{anT} \exp\!\left( \frac{f_0^{\,1-n}}{anT} \right) \exp\!\left( \frac{1-n}{nT}\,t \right) $$

Now recognizing that the combination $z e^z$ is the inverse of the Lambert W function, one has

$$ f(t) \approx \left[ anT\,W\!\left[ W^{-1}\!\left( \frac{f_0^{\,1-n}}{anT} \right) \exp\!\left( \frac{1-n}{nT}\,t \right) \right] \right]^{\frac{1}{1-n}} $$

It should be emphasized that this is not an exact solution to the equation under consideration. The next approximation with a power series expansion to second order in T does not appear easily soluble. The binomial expression can be recovered from this one by setting $W(z) \to \ln z$ and $W^{-1}(z) \to e^z$, although these approximations are not exact asymptotically.
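
For the numerically inclined, here is a sketch of how the approximate expression can be evaluated with scipy.special.lambertw and compared against the two sides of the delay equation via finite differences; the parameter values and the helper name f_approx are illustrative assumptions.

```python
# Evaluate the first-order-in-T approximation and compare f'(t) with a [f(t-T)]^n.
import numpy as np
from scipy.special import lambertw

a, n, T, f0 = 1.0, 2.0, 0.1, 1.0

def f_approx(t):
    x = f0**(1 - n) / (a * n * T)
    z = x * np.exp(x) * np.exp((1 - n) / (n * T) * t)   # W^{-1}(x) times exp[...]
    return (a * n * T * lambertw(z).real)**(1 / (1 - n))

t = np.linspace(0.0, 1.0, 5)
h = 1e-5
lhs = (f_approx(t + h) - f_approx(t - h)) / (2 * h)     # numerical f'(t)
rhs = a * f_approx(t - T)**n                            # a [f(t-T)]^n
print(np.max(np.abs(lhs - rhs)))                        # nonzero: only approximate
```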

Seeing this level of complexity from a first-order approximation gives one pause. Clearly some other analytic approach is needed.


One approach is Padé approximants, which are ratios of polynomials constructed to match the power series of a function to a given order. They are very often more accurate than the original power series, approximating the original function even outside the radius of convergence of that series. This stability makes them useful for approximating unknown solutions to differential equations as well.

Begin with the simplest so-called diagonal approximant

$$ f(t) \approx \frac{f_0 + a_1 t}{1 + b_1 t} $$

For all equations in this presentation, the coefficient a merely sets a scale for quantities with a dimension of time, according to the order of differentiation in each equation. As such it can easily be restored if needed, and so will be omitted in this section. Putting the approximant in the general delay differential equation under consideration, one has

$$ f'(t) \approx \frac{a_1 - b_1 f_0}{(1 + b_1 t)^2} \approx \left( \frac{f_0 - a_1 T + a_1 t}{1 - b_1 T + b_1 t} \right)^n \approx [f(t-T)]^n $$

Since there are only two constants to determine, expand each side to first order in the temporal variable:

$$ (a_1 - b_1 f_0)(1 - 2 b_1 t) \approx \left( \frac{f_0 - a_1 T}{1 - b_1 T} \right)^n \left( 1 + \frac{n a_1}{f_0 - a_1 T}\,t - \frac{n b_1}{1 - b_1 T}\,t \right) $$

Equating coefficients gives the two equations

$$ a_1 - b_1 f_0 = \left( \frac{f_0 - a_1 T}{1 - b_1 T} \right)^n $$

$$ \frac{n b_1}{1 - b_1 T} - 2 b_1 = \frac{n a_1}{f_0 - a_1 T} $$

While these equations are not particularly formidable, they cannot be solved exactly for an arbitrary power n; one can, however, perform a numerical search for the root closest to the origin. A bit of numerical experimentation reveals that b1 is always negative, so the search should start with that in mind. Taking f0=1 for convenience, the result is an approximation function that looks like this:

The function itself does not provide any sense of its accuracy as an approximation. A better indication is plotting the two sides of the differential equation above to see how well they match. With the left-hand derivative in red and the right-hand delayed function in purple, the result is

This first simple approximation is clearly of limited usefulness, but this is hardly unexpected.
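
For reference, the numerical search described above can be sketched with scipy.optimize.fsolve; the choices of n and T, and the initial guess with negative b1, are illustrative and may need adjustment for other parameters.

```python
# Determining equations for the first-order diagonal approximant, with f0 = 1.
from scipy.optimize import fsolve

n, T, f0 = 2.0, 0.5, 1.0

def equations(p):
    a1, b1 = p
    eq1 = a1 - b1 * f0 - ((f0 - a1 * T) / (1 - b1 * T))**n
    eq2 = n * b1 / (1 - b1 * T) - 2 * b1 - n * a1 / (f0 - a1 * T)
    return [eq1, eq2]

a1, b1 = fsolve(equations, [0.1, -0.5])
print(a1, b1)                           # b1 comes out negative
```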

The process can be extended with higher-order diagonal approximants for a more accurate and useful result. Consider the second-order diagonal approximant

$$ f(t) \approx \frac{f_0 + a_1 t + a_2 t^2}{1 + b_1 t + b_2 t^2} $$

Again set its derivative approximately equal to the delayed power:

$$ \frac{a_1 - b_1 f_0 + 2(a_2 - b_2 f_0)\,t + (a_2 b_1 - a_1 b_2)\,t^2}{(1 + b_1 t + b_2 t^2)^2} \approx \left( \frac{f_0 - a_1 T + a_2 T^2 + (a_1 - 2 a_2 T)\,t + a_2 t^2}{1 - b_1 T + b_2 T^2 + (b_1 - 2 b_2 T)\,t + b_2 t^2} \right)^n $$

With the abbreviations

$$ C_0 = a_1 - b_1 f_0 \qquad C_1 = \frac{2(a_2 - b_2 f_0)}{C_0} \qquad C_2 = \frac{a_2 b_1 - a_1 b_2}{C_0} $$
$$ A_0 = f_0 - a_1 T + a_2 T^2 \qquad A_1 = \frac{a_1 - 2 a_2 T}{A_0} \qquad A_2 = \frac{a_2}{A_0} $$
$$ B_0 = 1 - b_1 T + b_2 T^2 \qquad B_1 = \frac{b_1 - 2 b_2 T}{B_0} \qquad B_2 = \frac{b_2}{B_0} $$

things are a bit more manageable:

$$ C_0\,\frac{1 + C_1 t + C_2 t^2}{(1 + b_1 t + b_2 t^2)^2} \approx \left( \frac{A_0}{B_0} \right)^n \left( \frac{1 + A_1 t + A_2 t^2}{1 + B_1 t + B_2 t^2} \right)^n $$

Setting t=0 gives a nonlinear equation analogous to the first one in the previous approximation,

$$ C_0 = \left( \frac{A_0}{B_0} \right)^n $$

which may then be canceled from the two sides of the equation. Since the Taylor series to the same order of two equal functions are necessarily identical, it is algebraically simpler at this point to rewrite the equation in the equivalent form

$$ (1 + C_1 t + C_2 t^2)(1 + B_1 t + B_2 t^2)^n \approx (1 + b_1 t + b_2 t^2)^2 (1 + A_1 t + A_2 t^2)^n $$

In order to obtain three more equations to determine the four constants, expand each side to third order in the temporal variable. The two arbitrary powers have the same form under the rearrangement,

$$ (1 + A_1 t + A_2 t^2)^n \approx 1 + n(A_1 t + A_2 t^2) + \frac{n(n-1)}{2}(A_1 t + A_2 t^2)^2 + \frac{n(n-1)(n-2)}{6}(A_1 t + A_2 t^2)^3 $$
$$ \approx 1 + n A_1 t + \left[ n A_2 + \frac{n(n-1)}{2} A_1^2 \right] t^2 + \left[ n(n-1) A_1 A_2 + \frac{n(n-1)(n-2)}{6} A_1^3 \right] t^3 $$

while the squared binomial to the same order is

$$ (1 + b_1 t + b_2 t^2)^2 \approx 1 + 2 b_1 t + (b_1^2 + 2 b_2)\,t^2 + 2 b_1 b_2\,t^3 $$

One now merely needs to equate coefficients of powers to obtain the determining equations:

$$ n B_1 + C_1 = n A_1 + 2 b_1 $$

$$ \left[ n B_2 + \frac{n(n-1)}{2} B_1^2 \right] + n B_1 C_1 + C_2 = \left[ n A_2 + \frac{n(n-1)}{2} A_1^2 \right] + 2 n b_1 A_1 + b_1^2 + 2 b_2 $$

$$ \left[ n(n-1) B_1 B_2 + \frac{n(n-1)(n-2)}{6} B_1^3 \right] + C_1 \left[ n B_2 + \frac{n(n-1)}{2} B_1^2 \right] + n B_1 C_2 $$
$$ = \left[ n(n-1) A_1 A_2 + \frac{n(n-1)(n-2)}{6} A_1^3 \right] + 2 b_1 \left[ n A_2 + \frac{n(n-1)}{2} A_1^2 \right] + n A_1 (b_1^2 + 2 b_2) + 2 b_1 b_2 $$

Lovely! Clearly more complicated than the previous approximation, but still easily tractable numerically. The resulting function, again with f0=1 , looks similar to the previous case,

but again the function itself does not provide the information contained in the differential equation. With the left-hand derivative in red and the right-hand delayed function in purple, those two sides in comparison are

This is clearly a much better approximation than before, without an excess of added complexity. The approximation can be improved upon with a third- or fourth-order approximant, with a corresponding increase in algebraic complexity.
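
For completeness, here is a sketch of the corresponding numerical solve for the second-order approximant: the residual functions simply transcribe the determining equations above, with f0 = 1. The parameter values and the initial guess (seeded from the first-order result, with a2 = b2 = 0) are illustrative and may need adjustment.

```python
# Determining equations for the second-order diagonal approximant.
from scipy.optimize import fsolve

n, T, f0 = 2.0, 0.5, 1.0

def equations(p):
    a1, a2, b1, b2 = p
    A0 = f0 - a1 * T + a2 * T**2
    A1, A2 = (a1 - 2 * a2 * T) / A0, a2 / A0
    B0 = 1 - b1 * T + b2 * T**2
    B1, B2 = (b1 - 2 * b2 * T) / B0, b2 / B0
    C0 = a1 - b1 * f0
    C1, C2 = 2 * (a2 - b2 * f0) / C0, (a2 * b1 - a1 * b2) / C0
    eq1 = C0 - (A0 / B0)**n
    eq2 = n * B1 + C1 - n * A1 - 2 * b1
    eq3 = (n * B2 + n * (n - 1) / 2 * B1**2 + n * B1 * C1 + C2
           - n * A2 - n * (n - 1) / 2 * A1**2 - 2 * n * b1 * A1 - b1**2 - 2 * b2)
    eq4 = (n * (n - 1) * B1 * B2 + n * (n - 1) * (n - 2) / 6 * B1**3
           + C1 * (n * B2 + n * (n - 1) / 2 * B1**2) + n * B1 * C2
           - n * (n - 1) * A1 * A2 - n * (n - 1) * (n - 2) / 6 * A1**3
           - 2 * b1 * (n * A2 + n * (n - 1) / 2 * A1**2)
           - n * A1 * (b1**2 + 2 * b2) - 2 * b1 * b2)
    return [eq1, eq2, eq3, eq4]

a1, a2, b1, b2 = fsolve(equations, [0.09, 0.0, -0.49, 0.0])
print(a1, a2, b1, b2)
```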

Padé approximants are really quite magical!


Uploaded 2021.04.10 — Updated 2022.12.17 analyticphysics.com